11 research outputs found

    NEMO: Need-inspired Emotional Expressions within a Task-independent Framework

    Full text link
    This paper presents the underlying algorithms of an emotion model within a task-independent framework. The model, called NEMO, integrates a module of needs for emotional expressions. We suggest that appraisals can be confined within various scopes of needs; in other words, the framework allows control over appraisals based on a set of predefined levels of needs. In this way, the agent is able to sort out its priorities and express emotions according to its needs. The definitions of the needs and appraisal concepts, along with their computations, are presented to demonstrate their relation to the emotion generation mechanism in the multi-tasking environment of an autonomous emotive agent.
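    To make the need-gated appraisal idea concrete, here is a minimal Python sketch under our own assumptions: each need carries a predefined priority level and a satisfaction value, and appraisal is confined to the most pressing unmet need. All names (Need, appraise) are illustrative, not NEMO's actual API.

```python
# Minimal sketch of a need-gated appraisal loop (hypothetical names, not
# NEMO's real interface). Each need has a predefined priority level and a
# satisfaction value in [0, 1]; appraisal focuses on the most urgent deficit.
from dataclasses import dataclass

@dataclass
class Need:
    name: str
    priority: int        # predefined level: higher = more urgent
    satisfaction: float  # 0.0 (fully deprived) .. 1.0 (fully satisfied)

def appraise(needs):
    """Pick the most pressing unsatisfied need and derive an emotion from it."""
    active = [n for n in needs if n.satisfaction < 1.0]
    if not active:
        return ("contentment", 0.0)
    # Confine the appraisal to the highest-priority unmet need.
    focus = max(active, key=lambda n: n.priority)
    intensity = 1.0 - focus.satisfaction
    emotion = "distress" if intensity > 0.5 else "concern"
    return (emotion, intensity)

needs = [Need("task_completion", priority=2, satisfaction=0.3),
         Need("social_approval", priority=1, satisfaction=0.9)]
print(appraise(needs))  # -> ('distress', 0.7): the task need dominates
```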

    NEMOHIFI: An Affective HiFi Agent

    Get PDF
    This demo concerns a recently developed prototype of an emotionally-sensitive autonomous HiFi spoken conversational agent, called NEMOHIFI. The baseline agent was developed by the Speech Technology Group (GTH) and has recently been integrated with an emotional engine called NEMO (Need-inspired Emotional Model), enabling it to adapt to users' emotions and respond using appropriate expressive speech. NEMOHIFI controls and manages the HiFi audio system; for end users, its functions equate to those of a remote control, except that instead of clicking, the user interacts with the agent by voice. A pairwise comparison between the baseline (non-adaptive) agent and NEMOHIFI is also presented.

    Evaluation of a User-Adapted Spoken Language Dialogue System: Measuring the Relevance of the Contextual Information Sources

    Get PDF
    We present an evaluation of a spoken language dialogue system with a module for the management of user-related information, stored as user preferences and privileges. The flexibility of our dialogue management approach, based on Bayesian Networks (BN), together with a contextual information module that applies different strategies for handling such information, allows us to include user information as a new level in the Context Manager hierarchy. We propose a set of objective and subjective metrics to measure the relevance of the different contextual information sources. The analysis of our evaluation scenarios shows that the relevance of the short-term information (i.e. the system status) remains fairly stable throughout the dialogue, whereas the dialogue history and the user profile (i.e. the middle-term and long-term information, respectively) play a complementary role, their usefulness evolving as the dialogue progresses.
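    As a rough illustration of the three-level idea (not the evaluated system's implementation), the sketch below ranks candidate dialogue actions using weighted short-, middle- and long-term context sources; the sources, weights and actions are invented for the example.

```python
# Hypothetical sketch: each context source maps a candidate action to a
# relevance score in [0, 1]; the manager weights the three horizons when
# ranking actions. Real relevance weights would be re-estimated per turn.
def make_source(name, score_fn):
    return {"name": name, "score": score_fn}

sources = [
    make_source("system_status",    lambda a: 1.0 if a == "confirm" else 0.2),          # short-term
    make_source("dialogue_history", lambda a: 0.6 if a == "ask_slot" else 0.4),          # middle-term
    make_source("user_profile",     lambda a: 0.9 if a == "apply_preference" else 0.1),  # long-term
]
weights = [0.5, 0.3, 0.2]  # per-source relevance (illustrative values)

def best_action(candidates):
    return max(candidates, key=lambda a: sum(
        w * s["score"](a) for s, w in zip(sources, weights)))

print(best_action(["confirm", "ask_slot", "apply_preference"]))
# -> 'confirm' (the short-term system status dominates with these weights)
```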

    Expressive Speech Identifications based on Hidden Markov Model

    Get PDF
    This paper concerns a sub-area of the larger research field of Affective Computing, focusing on affect-recognition systems that use the speech modality. It is proposed that speech-based affect identification systems could play an important role as next-generation biometric identification systems aimed at determining a person’s ‘state of mind’, or psycho-physiological state. Possible areas for the deployment of voice-affect recognition technology are discussed. Additionally, experiments and results for emotion identification in speech based on a Hidden Markov Model (HMM) classifier are presented. The experimental results suggest that certain speech features are better suited to identifying certain emotional states, and that happiness is the most difficult emotion to detect.
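    A common realisation of HMM-based emotion identification, and a plausible reading of the setup described here, is to train one HMM per emotion and classify by maximum likelihood. The sketch below uses the hmmlearn library; the features, topology and state count are assumptions, since the abstract does not give the exact configuration.

```python
# One-HMM-per-emotion classification sketch using hmmlearn (assumed setup).
# Each utterance is a (n_frames, n_dims) array of acoustic features, e.g. MFCCs.
import numpy as np
from hmmlearn import hmm

def train_models(data_by_emotion, n_states=3):
    """data_by_emotion: dict emotion -> list of (n_frames, n_dims) arrays."""
    models = {}
    for emotion, utterances in data_by_emotion.items():
        X = np.vstack(utterances)              # stacked frames of all utterances
        lengths = [len(u) for u in utterances] # per-utterance frame counts
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=20, random_state=0)
        m.fit(X, lengths)
        models[emotion] = m
    return models

def classify(models, utterance):
    # Pick the emotion whose HMM assigns the highest log-likelihood.
    return max(models, key=lambda e: models[e].score(utterance))
```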

    Sistema de traducción de lenguaje SMS a castellano

    Get PDF
    This article describes the process carried out to develop a system for translating SMS (Short Message Service) language into Spanish. First, a database needed to develop the system is generated, comprising more than 11,000 terms and expressions in SMS language with their translations into Spanish, as well as example sentences in SMS language for a first evaluation of the system. The complete architecture consists of a statistical machine translator based on sub-phrases or word sequences, and a set of functions implemented to operate on the sentences in real time. The architecture is evaluated with the following metrics: WER (word error rate), BLEU (BiLingual Evaluation Understudy) and NIST. As a final result, a word error rate of 20.2% is obtained for the best experiment.
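    WER, the metric behind the 20.2% figure, is the word-level edit distance between hypothesis and reference, normalised by the reference length. A self-contained sketch:

```python
# Word error rate (WER): Levenshtein distance over words / reference length.
def wer(reference, hypothesis):
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between r[:i] and h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)

print(wer("a que hora quedamos", "a que hora quedamos hoy"))  # 0.25 (1 insertion)
```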

    Advanced Speech Communication System for Deaf People

    Get PDF
    This paper describes the development of an Advanced Speech Communication System for Deaf People and its field evaluation in a real application domain: the renewal of a Driver’s License. The system is composed of two modules. The first is a Spanish into Spanish Sign Language (LSE: Lengua de Signos Española) translation module made up of a speech recognizer, a natural language translator (for converting a word sequence into a sequence of signs), and a 3D avatar animation module (for playing back the signs). The second is a spoken Spanish generator from sign writing, composed of a visual interface (for specifying a sequence of signs), a language translator (for generating the sequence of words in Spanish), and finally a text-to-speech converter. For language translation, the system integrates three technologies: an example-based strategy, a rule-based translation method and a statistical translator. This paper also includes a detailed description of the evaluation carried out in the Local Traffic Office in the city of Toledo (Spain), involving real government employees and deaf people. This evaluation includes objective measurements from the system and subjective information from questionnaires.
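    The abstract does not say how the three translation technologies are combined; one plausible arrangement, shown purely as a sketch, is a confidence-gated cascade from example-based memory to rules to the statistical translator. All function names here are hypothetical.

```python
# Hypothetical cascade over the three translation strategies named in the
# abstract; the fallback order and confidence gate are our assumptions.
def translate_to_signs(words, example_memory, rule_translate, smt_translate,
                       min_confidence=0.8):
    key = " ".join(words)
    if key in example_memory:                  # 1. exact example-based match
        return example_memory[key]
    signs, confidence = rule_translate(words)  # 2. hand-written rules
    if confidence >= min_confidence:
        return signs
    return smt_translate(words)                # 3. statistical fallback
```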

    Source Language Categorization for improving a Speech into Sign Language Translation System

    Get PDF
    This paper describes a categorization module for improving the performance of a Spanish into Spanish Sign Language (LSE) translation system. The categorization module replaces Spanish words with associated tags. When implementing this module, several alternatives for dealing with non-relevant words were studied; non-relevant words are Spanish words that do not contribute to the translation process. The categorization module has been incorporated into a phrase-based system and a Statistical Finite State Transducer (SFST). The evaluation results reveal that the BLEU score increased from 69.11% to 78.79% for the phrase-based system and from 69.84% to 75.59% for the SFST.
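    A minimal sketch of such a categorization step, with an invented tag inventory and a "drop" policy for non-relevant words standing in for one of the alternatives the paper compares:

```python
# Illustrative categorization preprocessing: replace Spanish words with tags
# and drop (or keep) non-relevant words. Tag inventory and word lists are
# invented for the example, not taken from the paper.
CATEGORIES = {"lunes": "$DAY", "martes": "$DAY", "enero": "$MONTH",
              "febrero": "$MONTH", "uno": "$NUMBER", "dos": "$NUMBER"}
NON_RELEVANT = {"pues", "eh", "bueno"}  # fillers that do not affect translation

def categorize(sentence, drop_non_relevant=True):
    out = []
    for word in sentence.lower().split():
        if word in NON_RELEVANT:
            if not drop_non_relevant:
                out.append(word)  # alternative policy: keep them verbatim
            continue
        out.append(CATEGORIES.get(word, word))
    return " ".join(out)

print(categorize("pues quiero renovarlo el lunes dos de enero"))
# -> "quiero renovarlo el $DAY $NUMBER de $MONTH"
```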

    Games based learning for Exploring Cultural Conflict

    Get PDF
    In this paper we discuss the early-stage design of MIXER, a technology-enhanced educational application aimed at supporting children in learning about cultural conflict through a game featuring an affective embodied AI agent. MIXER is being developed by re-using existing technology in a different context and for a different purpose, with the aim of creating an educational and enjoyable experience for 9-11 year olds. This paper outlines MIXER’s underpinning technology and theory, presents its early-stage design and development, and highlights current research directions.

    Acoustic Emotion Recognition using Dynamic Bayesian Networks and Multi-Space Distributions

    Full text link
    In this paper we describe the acoustic emotion recognition system built at the Speech Technology Group of the Universidad Politecnica de Madrid (Spain) to participate in the INTERSPEECH 2009 Emotion Challenge. Our proposal is based on the use of a Dynamic Bayesian Network (DBN) to handle the temporal modelling of the emotional speech information. The selected features (MFCC, F0, energy and their variants) are modelled as different streams, and the F0-related ones are integrated under a Multi-Space Distribution (MSD) framework to properly model their dual (voiced/unvoiced) nature. Experimental evaluation on the challenge test set shows 67.06% and 38.24% unweighted recall for the 2-class and 5-class tasks, respectively. In the 2-class case, we achieve results similar to the baseline with considerably fewer features. In the 5-class case, we achieve a statistically significant 6.5% relative improvement.
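    Unweighted recall (also called unweighted average recall, the challenge's official measure) averages per-class recall without weighting by class frequency, so majority-class guessing is not rewarded. A small self-contained sketch; the NEG/IDL labels follow the challenge's 2-class task:

```python
# Unweighted (macro-averaged) recall: mean of per-class recalls.
from collections import defaultdict

def unweighted_recall(y_true, y_pred):
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        hits[t] += (t == p)
    return sum(hits[c] / totals[c] for c in totals) / len(totals)

# Two-class toy example: class imbalance does not inflate the score.
y_true = ["NEG", "NEG", "NEG", "IDL"]
y_pred = ["NEG", "NEG", "IDL", "IDL"]
print(unweighted_recall(y_true, y_pred))  # (2/3 + 1/1) / 2 ≈ 0.833
```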

    User-centric need-driven affect modeling for spoken conversational agents: design and evaluation

    Full text link
    It is easy to get frustrated with spoken conversational agents (SCAs), perhaps because they seem callous. By and large, the quality of human-computer interaction suffers from the inability of SCAs to recognise and adapt to the user's emotional state. With the mass appeal of artificially-mediated communication, there is an increasing need for SCAs to be socially and emotionally intelligent, that is, to infer and adapt to their human interlocutors’ emotions on the fly, in order to achieve an affective, empathetic and naturalistic interaction. An enhanced quality of interaction would reduce users’ frustration and consequently increase their satisfaction. These reasons have motivated the development of SCAs that include socio-emotional elements, turning them into affective and socially-sensitive interfaces. One barrier to the creation of such interfaces has been the lack of methods for modelling emotions in a task-independent environment: most emotion models for spoken dialog systems are task-dependent and thus cannot be used “as-is” in different applications. This Thesis addresses that gap, concerning itself with the computational modeling of emotion, personality and their interrelationship for task-independent autonomous SCAs. The generation of emotion is driven by needs, inspired by human motivational systems.

    The work in this Thesis is organised in three stages, each with its own contribution. The first stage involved defining, integrating and quantifying the psychologically-grounded motivational and emotional models on which the work builds. These were then transformed into a computational model by implementing them as software entities, and the computational model was incorporated into, and put to the test with, an existing SCA host, a HiFi-control agent.

    The second stage concerned the automatic prediction of affect, the main challenge on the way to the greater aim of infusing social intelligence into the HiFi agent. In recent years, studies on affect detection from voice have moved on to using realistic, non-acted data, in which emotions are subtler; perceiving subtler emotions is harder, as demonstrated in tasks such as labelling and machine prediction. In this stage, we addressed part of this challenge by using user satisfaction ratings as the target and conversational/dialog features as predictors to discriminate contentment from frustration, two emotions known to be prevalent in spoken human-computer interaction.

    The final stage concerned the evaluation of the emotional model through the HiFi agent. A series of user studies with 70 subjects was conducted in a real-time environment, each in a different phase and under its own conditions. All the studies compared the baseline, non-modified agent against the modified agent. The findings have gone some way towards enhancing our understanding of the utility of emotion in spoken dialog systems in several ways. First, an SCA should not express its emotions blindly, even positive ones; rather, it should adapt its emotions to user states. Second, low performance in an SCA may be compensated for by the exploitation of emotion. Third, expressing emotion through prosody can improve users’ perceptions of an SCA more than expressing it through lexical content alone.
    Taken together, these findings not only support the success of the emotional model, but also provide substantial evidence of the benefits of adding emotion to an SCA, especially in mitigating users’ frustration and ultimately improving their satisfaction.
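    To illustrate the second-stage idea of predicting contentment versus frustration from conversational features, here is a hedged sketch with invented features and a standard scikit-learn classifier; the thesis's actual feature set and learner may differ.

```python
# Sketch: discriminate contentment (high satisfaction) from frustration using
# dialog-level features. Feature names and the choice of logistic regression
# are illustrative assumptions, not the thesis's actual configuration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-dialog features: [n_turns, n_errors, n_repetitions, task_success]
X = np.array([[5, 0, 0, 1],
              [12, 4, 3, 0],
              [6, 1, 0, 1],
              [15, 5, 4, 0]])
y = np.array([1, 0, 1, 0])  # 1 = contentment, 0 = frustration

clf = LogisticRegression().fit(X, y)
print(clf.predict([[7, 1, 1, 1]]))  # short, mostly successful dialog -> contentment
```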